CenterTransFuser: radar point cloud and visual information fusion for 3D object detection

Abstract

Sensor fusion is an important component of the perception system in autonomous driving, and fusing radar point cloud information with camera visual information can improve the perception capability of autonomous vehicles. However, most existing studies ignore the extraction of local neighborhood information and only consider shallow fusion of the two modalities based on extracted global information, so they cannot perform deep cross-modal contextual interaction. Meanwhile, during data preprocessing, radar noise is usually filtered only by depth information derived from image feature prediction; such methods affect the accuracy of the radar branch in generating regions of interest and in effectively filtering out irrelevant points. This paper proposes the CenterTransFuser model, which makes full use of millimeter-wave radar and visual information to enable deep fusion of heterogeneous information. Specifically, a new interaction module called cross-transformer is explored, which cooperatively exploits cross-multiple attention and joint attention to mine complementary cross-modal information. In addition, an adaptive thresholding filtering method is designed to reduce the modality-independent noise projected onto the image. The model is evaluated on the challenging nuScenes dataset and achieves excellent performance. In particular, detection accuracy is significantly improved for pedestrians, motorcycles, and bicycles, showing the superiority and effectiveness of the proposed model.
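At its core, the cross-transformer described above is a bidirectional cross-attention exchange between radar and image features. The following is a minimal sketch of that kind of interaction, assuming PyTorch; the class name, token layout, and dimensions are illustrative assumptions and do not reproduce the paper's actual module.

```python
# A minimal, self-contained sketch of bidirectional cross-attention fusion
# of the kind the abstract describes. All names, dimensions, and design
# choices are illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn


class CrossModalAttentionBlock(nn.Module):
    """Lets image tokens attend to radar tokens and vice versa."""

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        # Image queries attend over radar keys/values, and the reverse.
        self.img_from_radar = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.radar_from_img = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_img = nn.LayerNorm(dim)
        self.norm_radar = nn.LayerNorm(dim)

    def forward(self, img_tokens: torch.Tensor, radar_tokens: torch.Tensor):
        # img_tokens:   (B, N_img, dim)  flattened image feature map
        # radar_tokens: (B, N_rad, dim)  embedded radar points
        img_ctx, _ = self.img_from_radar(img_tokens, radar_tokens, radar_tokens)
        rad_ctx, _ = self.radar_from_img(radar_tokens, img_tokens, img_tokens)
        # Residual connections preserve each modality's own features.
        return (self.norm_img(img_tokens + img_ctx),
                self.norm_radar(radar_tokens + rad_ctx))


if __name__ == "__main__":
    block = CrossModalAttentionBlock()
    img = torch.randn(2, 1024, 256)   # e.g. a 32x32 feature map, flattened
    radar = torch.randn(2, 128, 256)  # e.g. 128 projected radar points
    fused_img, fused_radar = block(img, radar)
    print(fused_img.shape, fused_radar.shape)
```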
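Likewise, the adaptive thresholding filter can be pictured as keeping only those projected radar points whose depth agrees with the image-predicted depth at the same pixel. The sketch below assumes a simple distance-scaled tolerance; the specific rule and parameter names (`base_tol`, `rel_tol`) are hypothetical stand-ins, since the abstract does not give the paper's formula.

```python
# A hypothetical sketch of adaptive threshold filtering for radar points
# projected onto the image plane. The distance-scaled tolerance below is
# an assumption for illustration, not the paper's actual method.
import numpy as np


def filter_projected_points(radar_depth: np.ndarray,
                            predicted_depth: np.ndarray,
                            base_tol: float = 0.5,
                            rel_tol: float = 0.05) -> np.ndarray:
    """Keep radar points whose depth agrees with the image-predicted depth.

    radar_depth:     (N,) depth of each projected radar point [m]
    predicted_depth: (N,) depth predicted from image features at the
                     corresponding pixel locations [m]
    Returns a boolean mask of points to keep.
    """
    # Adaptive threshold: farther points are allowed a larger error,
    # since both radar and monocular depth degrade with distance.
    threshold = base_tol + rel_tol * radar_depth
    return np.abs(radar_depth - predicted_depth) <= threshold


if __name__ == "__main__":
    radar = np.array([10.0, 25.0, 60.0])
    pred = np.array([10.3, 28.0, 61.5])
    print(filter_projected_points(radar, pred))  # [ True False  True]
```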

Similar Articles

Robust Automatic 3D Point Cloud Registration and Object Detection

In order to construct survey-grade 3D models of buildings, roads, railway stations, canals and other similar structures, the 3D environment must be fully recorded with accuracy. Following this, accurate measurements of the dimensions can be made on the recorded 3D datasets to enable 3D model extraction without having to return to the site, and in significantly reduced time. The model may be com...

Stereo Image Point Cloud and Lidar Point Cloud Fusion for 3D Street Mapping

Combining active and passive imaging sensors enables creating a more detailed 3D model of the real world. These 3D data can then be used for various applications, such as city mapping, indoor navigation, autonomous vehicles, etc. Typically, a LiDAR and a camera are installed on these systems as imaging sensors. Both of these sensors have advantages and drawbacks. Thus, the LiDAR sensor directly provid...

Point Cloud Computing for Rigid and Deformable 3D Object Recognition

Machine vision is a technologically and economically important field of computer vision. It eases the automation of inspection and manipulation tasks, which in turn enables cost savings and quality improvements in industrial processes. Usually, 2D or intensity images are used for such applications. However, thanks to several technological advances, nowadays there are sensors available that allow...

The Object Detection Efficiency in Synthetic Aperture Radar Systems

The main purpose of this paper is to develop the method of characteristic functions for calculating detection characteristics in the case of an object surrounded by rough surfaces. This method is to be implemented in synthetic aperture radar (SAR) systems using optimal resolution algorithms. By applying the specified technique, expressions have been obtained for the false alarm and cor...

Learning Efficient Point Cloud Generation for Dense 3D Object Reconstruction

Conventional methods of 3D object generative modeling learn volumetric predictions using deep networks with 3D convolutional operations, which are direct analogies to classical 2D ones. However, these methods are computationally wasteful when attempting to predict 3D shapes, where information is rich only on the surfaces. In this paper, we propose a novel 3D generative modeling framework to efficien...

Journal

Journal title: EURASIP Journal on Advances in Signal Processing

Year: 2023

ISSN: 1687-6180, 1687-6172

DOI: https://doi.org/10.1186/s13634-022-00944-6